Distance Encoding


Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning

Neural Information Processing Systems

Learning representations of sets of nodes in a graph is crucial for applications ranging from node-role discovery to link prediction and molecule classification. Graph Neural Networks (GNNs) have achieved great success in graph representation learning. However, the expressive power of GNNs is limited by the 1-Weisfeiler-Lehman (WL) test, and thus GNNs generate identical representations for graph substructures that may in fact be very different. More powerful GNNs, proposed recently by mimicking higher-order WL tests, focus only on representing entire graphs, and they are computationally inefficient as they cannot utilize the sparsity of the underlying graph. Here we propose and mathematically analyze a general class of structure-related features, termed Distance Encoding (DE).


Review for NeurIPS paper: Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning

Neural Information Processing Systems

Summary and Contributions: [edit: see "additional feedback" section for response to author rebuttal] This paper considers the problem of learning representations of specific subgraphs of a larger graph, in such a way that subgraphs of non-isomorphic graphs are assigned different representations. Specifically, the authors note that most GNNs can only distinguish what the 1-WL test can distinguish, and they propose a new method of computing graph features (called "distance encoding") to get around this restriction. The distance encoding the authors propose appears to work as follows. Suppose we want to compute a representation of a subgraph S of a graph G. Instead of running a GNN on G and then pooling across the subgraph, the subgraph S is used to modify the features of all of the nodes in G, based on the distances between those nodes and the nodes in S (this is DEGNN). Optionally, the subgraph is also used to modify the edge structure by, conceptually, adding extra edges and/or edge features based on pairwise distances in G (this is DEAGNN).
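The feature-modification step the review describes can be made concrete with a small sketch. The following is an illustrative implementation, not the authors' code: it computes, for every node in G, a one-hot encoding of the truncated shortest-path distance to each node of the target set S (one simple instance of a distance encoding; the function and parameter names here are hypothetical).

```python
from collections import deque

def distance_encoding(adj, S, max_dist=3):
    """One DE feature vector per node: the shortest-path distance from the
    node to each node in the target set S, clamped at max_dist and one-hot
    encoded. `adj` maps node -> list of neighbours (unweighted graph)."""
    def bfs(source):
        # Standard BFS giving shortest-path distances from `source`.
        dist = {source: 0}
        q = deque([source])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    dists = [bfs(s) for s in S]  # one BFS per node of the target set
    enc = {}
    for node in adj:
        vec = []
        for d in dists:
            # Unreachable or distant nodes fall into the last bucket.
            k = min(d.get(node, max_dist), max_dist)
            onehot = [0] * (max_dist + 1)
            onehot[k] = 1
            vec.extend(onehot)
        enc[node] = vec
    return enc

# Example: 4-cycle 0-1-2-3-0 with target set S = {0}.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
enc = distance_encoding(adj, S=[0], max_dist=3)
```

In a DEGNN-style model these vectors would be concatenated onto the original node features before message passing; here `enc[2]` marks node 2 as being at distance 2 from the target set, information a 1-WL-bounded GNN cannot recover on its own.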


Review for NeurIPS paper: Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning

Neural Information Processing Systems

This exciting paper introduces interesting and novel theoretical contributions to the graph neural network literature. The authors also verified some of their theoretical findings empirically. This paper is worth presenting at NeurIPS on the condition that the authors address the concerns raised by the reviewers on writing and clarity. The paper makes a valuable contribution in better characterizing 1-WL's power, yet it jumps directly to using distance while skipping the discussion of more straightforward conditioning approaches (such as annotation). The step from 1-WL directly to DE seems too large without discussing how much is gained by such simpler conditioning alone.


Neural combinatorial optimization beyond the TSP: Existing architectures under-represent graph structure

Boffa, Matteo, Houidi, Zied Ben, Krolikowski, Jonatan, Rossi, Dario

arXiv.org Artificial Intelligence

Recent years have witnessed the promise that reinforcement learning, coupled with Graph Neural Network (GNN) architectures, could learn to solve hard combinatorial optimization problems: given raw input data and an evaluator to guide the process, the idea is to automatically learn a policy able to return feasible, high-quality outputs. Recent work has shown promising results, but these were mainly evaluated on the travelling salesman problem (TSP) and similar abstract variants such as the Split Delivery Vehicle Routing Problem (SDVRP). In this paper, we analyze how and whether recent neural architectures can be applied to graph problems of practical importance. We thus set out to systematically "transfer" these architectures to the Power and Channel Allocation Problem (PCAP), which has practical relevance for, e.g., radio resource allocation in wireless networks. Our experimental results suggest that existing architectures (i) are still incapable of capturing graph structural features and (ii) are not suitable for problems where the actions on the graph change the graph attributes. On a positive note, we show that augmenting the structural representation of problems with Distance Encoding is a promising step towards the still-ambitious goal of learning multi-purpose autonomous solvers.